GLT-T: Global-Local Transformer Voting for 3D Single Object Tracking in Point Clouds

Authors

Abstract

Current 3D single object tracking methods are typically based on VoteNet, a 3D region proposal network. Despite their success, using the seed point feature as the sole cue for offset learning in VoteNet prevents high-quality 3D proposals from being generated. Moreover, seed points with different importance are treated equally in the voting process, aggravating this defect. To address these issues, we propose a novel global-local transformer voting scheme to provide more informative cues and guide the model to pay attention to potential seed points, promoting the generation of high-quality 3D proposals. Technically, a global-local transformer (GLT) module is employed to integrate object- and patch-aware priors into seed point features, effectively forming a strong feature representation of their geometric positions and thus providing robust and accurate cues for offset learning. Subsequently, a simple yet effective training strategy is designed to train the GLT module. We develop an importance prediction branch to learn the potential importance of the seed points and treat the output weight vector as a training constraint term. By incorporating the above components, we obtain a superior tracking method, GLT-T. Extensive experiments on the challenging KITTI and NuScenes benchmarks demonstrate that GLT-T achieves state-of-the-art performance on the 3D single object tracking task. Further ablation studies show the advantages of the proposed global-local transformer voting scheme over the original VoteNet. Code and models will be available at https://github.com/haooozi/GLT-T.
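The abstract describes two components: a global-local transformer module that enriches seed point features with object- and patch-aware priors before vote-offset regression, and an importance prediction branch whose output weights act as a training constraint. The PyTorch sketch below only illustrates that idea under stated assumptions; the module names, feature dimensions, k-nearest-neighbour grouping, and the use of the importance weights are hypothetical and are not taken from the authors' released implementation (see https://github.com/haooozi/GLT-T).

```python
# Minimal sketch of a global-local transformer voting block (illustrative only).
import torch
import torch.nn as nn


class GlobalLocalTransformerVoting(nn.Module):
    def __init__(self, feat_dim=256, num_heads=4, k_local=16):
        super().__init__()
        # Global branch: self-attention over all seed points (object-aware prior).
        self.global_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Local branch: MLP over k-nearest-neighbour patches (patch-aware prior).
        self.k_local = k_local
        self.local_mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # Vote (offset) regression head and importance prediction branch.
        self.offset_head = nn.Linear(feat_dim, 3)
        self.importance_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, seed_xyz, seed_feat):
        # seed_xyz: (B, N, 3) seed coordinates; seed_feat: (B, N, C) seed features.
        global_feat, _ = self.global_attn(seed_feat, seed_feat, seed_feat)

        # Gather k nearest neighbours of each seed and max-pool their features.
        dist = torch.cdist(seed_xyz, seed_xyz)                    # (B, N, N)
        knn_idx = dist.topk(self.k_local, largest=False).indices  # (B, N, k)
        knn_feat = torch.gather(
            seed_feat.unsqueeze(1).expand(-1, seed_feat.size(1), -1, -1),
            2, knn_idx.unsqueeze(-1).expand(-1, -1, -1, seed_feat.size(-1)),
        )                                                         # (B, N, k, C)
        local_feat = self.local_mlp(knn_feat).max(dim=2).values   # (B, N, C)

        # Fuse global and local cues, then predict vote offsets and importance.
        fused = self.fuse(torch.cat([global_feat, local_feat], dim=-1))
        votes = seed_xyz + self.offset_head(fused)         # per-seed votes
        importance = self.importance_head(fused).squeeze(-1)  # per-seed weights
        return votes, fused, importance
```

During training, such per-seed importance weights could, for example, re-weight the vote-offset regression loss so that low-importance seeds contribute less; this is one plausible reading of the "training constraint term" mentioned in the abstract, not a statement of the authors' exact loss.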


Similar Articles

Implicit Shape Models for Object Detection in 3d Point Clouds

We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved v...


LocNet: Global localization in 3D point clouds for mobile robots

Global localization in 3D point clouds is a challenging problem of estimating the pose of robots without prior knowledge. In this paper, a solution to this problem is presented by achieving place recognition and metric pose estimation in a global prior map. Specifically, we present a semi-handcrafted representation learning method for LIDAR point clouds using siamese LocNets, which states th...


Local 3D Symmetry for Visual Saliency in 2.5D Point Clouds

Many models of visual attention have been proposed in the past and proved to be very useful, e.g. in robotic applications. Recently it has been shown in the literature that not only single visual features, such as color, orientation, curvature, etc., attract attention, but also complete objects do. Symmetry is a feature of many man-made and also natural objects and has thus been identified as a can...


People Detection in 3d Point Clouds Using Local Surface Normals

The ability to detect people in domestic and unconstrained environments is crucial for every service robot. Knowledge of where people are is required to perform several tasks such as navigation with dynamic obstacle avoidance and human-robot interaction. In this paper we propose a people detection approach based on 3d data provided by an RGB-D camera. We introduce a novel 3d feature descriptor ...


3d Object Segmentation of Point Clouds Using Profiling Techniques

In the automatic processing of point clouds, higher-level information in the form of point segments is required for classification and object detection purposes. Point cloud segmentation allows for the definition of these segments. Various algorithms have been proposed for the segmentation of point clouds. The advancement of Lidar capabilities has resulted in an increase in the volumes of data cap...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i2.25287